Enterprise Database Systems
Hadoop Ecosystem
Data Factory with Hive
Data Factory with Oozie and Hue
Data Factory with Pig
Data Flow for the Hadoop Ecosystem
Data Refinery with YARN and MapReduce
Data Repository with Flume
Data Repository with HDFS and HBase
Data Repository with Sqoop
Ecosystem for Hadoop
Installation of Hadoop

Data Factory with Hive

Course Number:
df_ahec_a07_it_enus
Lesson Objectives

Data Factory with Hive

  • start the course
  • recall the key attributes of Hive
  • describe the configuration files
  • install and configure Hive
  • create a table in Derby using Hive
  • create a table in MySQL using Hive
  • recall the unique delimiter that Hive uses
  • describe the different operators in Hive
  • use basic SQL commands in Hive
  • use SELECT statements in Hive
  • use more complex HiveQL
  • write and use Hive scripts
  • recall what types of joins Hive can support
  • use Hive to perform joins
  • recall that a Hive partition schema must be created before loading the data
  • write a Hive partition script
  • recall how buckets are used to improve performance
  • create Hive buckets
  • recall some best practices for user defined functions
  • create a user defined function for Hive
  • recall the standard error code ranges and what they mean
  • use a Hive explain plan
  • understand configuration options, data loading, and querying

Overview/Description
Apache Hadoop is a set of algorithms for distributed storage and distributed processing of Big Data on computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are commonplace and thus should be automatically handled in software by the framework. In this course, you'll explore Hive as a SQL-like tool for interfacing with Hadoop. The course demonstrates the installation and configuration of Hive, followed by a demonstration of Hive in action. Finally, you'll learn about extracting and loading data between Hive and an RDBMS.
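
As a rough illustration of the SQL-like interface the course introduces, the sketch below queries Hive over JDBC from Java. It assumes HiveServer2 is listening on localhost:10000 and that a table named employees already exists; the connection details and the table are assumptions for illustration only, not part of the course material.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
        public static void main(String[] args) throws Exception {
            // The driver ships with the hive-jdbc artifact.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT department, COUNT(*) FROM employees GROUP BY department")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }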

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Factory with Oozie and Hue

Course Number:
df_ahec_a09_it_enus
Lesson Objectives

Data Factory with Oozie and Hue

  • start the course
  • describe the metastore and HiveServer2
  • install and configure metastore
  • install and configure HiveServer2
  • describe HCatalog
  • install and configure WebHCat
  • use HCatalog to flow data
  • recall the Oozie terminology
  • recall the two categories of environmental variables for configuring Oozie
  • install Oozie
  • configure Oozie
  • configure Oozie to use MySQL
  • enable the Oozie Web Console
  • describe Oozie workflows
  • submit an Oozie workflow job
  • create an Oozie workflow
  • run an Oozie workflow job
  • describe Hue
  • recall the configuration files that must be edited
  • install Hue
  • configure the hue.ini file
  • install and configure Hue on MySQL
  • use the Hue File Browser and Job Scheduler
  • configure Hive daemons, Oozie, and Hue

Overview/Description
The Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. This course explains Oozie as a workflow tool used to manage multi-stage tasks in Hadoop. Additionally, you'll learn how to use Hue, a browser-based front-end tool.
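
To make the workflow-submission flow concrete, here is a hedged sketch that uses the Oozie Java client to submit a workflow job, paralleling what the course does through the CLI and Web Console. It assumes the Oozie server is reachable at http://localhost:11000/oozie and that a workflow.xml has already been uploaded to the HDFS path shown; the URLs, paths, and property values are assumptions for illustration.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class OozieSubmitExample {
        public static void main(String[] args) throws Exception {
            // Server URL is an assumption; adjust to your Oozie installation.
            OozieClient client = new OozieClient("http://localhost:11000/oozie");

            // These properties mirror what a job.properties file would hold on the CLI.
            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://localhost:9000/user/hadoop/workflows/wordcount");
            conf.setProperty("nameNode", "hdfs://localhost:9000");
            conf.setProperty("jobTracker", "localhost:8032");

            String jobId = client.run(conf);            // submit and start the workflow
            System.out.println("Submitted workflow: " + jobId);

            WorkflowJob job = client.getJobInfo(jobId); // poll the job status once
            System.out.println("Status: " + job.getStatus());
        }
    }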

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Factory with Pig

Course Number:
df_ahec_a08_it_enus
Lesson Objectives

Data Factory with Pig

  • start the course
  • describe Pig and its strengths
  • recall the minimal edits that need to be made to the configuration file
  • install and configure Pig
  • recall the complex data types used by Pig
  • recall some of the relational operators used by Pig
  • use the Grunt shell with Pig Latin
  • set parameters from both a text file and with the command line
  • write a Pig script
  • use a Pig script to filter data
  • use the FOREACH operator with a Pig script
  • set parameters and arguments in a Pig script
  • write a Pig script to count data
  • perform data joins using a Pig script
  • group data using a Pig script
  • cogroup data with a Pig script
  • flatten data using a Pig script
  • recall the languages that can be used to write user defined functions
  • create a user defined function for Pig
  • recall the different types of error categories
  • use explain in a Pig script
  • install Pig, use Pig operators and Pig Latin, and retrieve and group records

Overview/Description
Hadoop is open source software for affordable supercomputing. It provides the distributed file system and the parallel processing required to run a massive computing cluster. This course explains Pig as a data flow scripting tool for interfacing with Hadoop. You'll learn about the installation and configuration of Pig and explore a demonstration of Pig in action.
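
As a hedged sketch of the data flow scripting the course covers, the example below runs a few Pig Latin statements from Java through the embedded PigServer in local mode. The tab-delimited input file orders.tsv and its schema are assumptions made for illustration.

    import java.util.Iterator;
    import org.apache.pig.PigServer;
    import org.apache.pig.data.Tuple;

    public class PigEmbeddedExample {
        public static void main(String[] args) throws Exception {
            PigServer pig = new PigServer("local");   // "mapreduce" would target a cluster instead

            // Each registerQuery call adds one Pig Latin statement to the plan.
            pig.registerQuery("orders = LOAD 'orders.tsv' AS (id:int, product:chararray, qty:int);");
            pig.registerQuery("big = FILTER orders BY qty > 10;");
            pig.registerQuery("by_product = GROUP big BY product;");
            pig.registerQuery("counts = FOREACH by_product GENERATE group, COUNT(big);");

            // openIterator triggers execution and streams the results back.
            Iterator<Tuple> it = pig.openIterator("counts");
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }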

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Flow for the Hadoop Ecosystem

Course Number:
df_ahec_a10_it_enus
Lesson Objectives

Data Flow for the Hadoop Ecosystem

  • start the course
  • describe data life cycle management
  • recall the parameters that must be set in the Sqoop import statement
  • create a table and load data into MySQL
  • use Sqoop to import data into Hive
  • recall the parameters that must be set in the Sqoop export statement
  • use Sqoop to export data from Hive
  • recall the three most common date datatypes and which systems support each
  • use casting to import datetime stamps into Hive
  • export datetime stamps from Hive into MySQL
  • describe dirty data and how it should be preprocessed
  • use Hive to create tables outside the warehouse
  • use Pig to sample data
  • recall some other popular components for the Hadoop Ecosystem
  • recall some best practices for pseudo-mode implementation
  • write custom scripts to assist with administrative tasks
  • troubleshoot classpath errors
  • create complex configuration files
  • use Sqoop and Hive for data flow and fusion in the Hadoop ecosystem

Overview/Description
Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System (GFS) and the MapReduce computing paradigm. You'll explore a demonstration of the use of Sqoop and Hive with Hadoop to flow and fuse data. The demonstration includes preprocessing data, partitioning data, and joining data.
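
As a rough sketch of the Sqoop-to-Hive flow described above, the Java program below shells out to the sqoop CLI to import a MySQL table directly into Hive. It assumes sqoop is on the PATH and that MySQL holds a sales table in a shop database; the connection details, credentials, and table names are assumptions for illustration.

    import java.io.IOException;

    public class SqoopHiveImportExample {
        public static void main(String[] args) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "sqoop", "import",
                    "--connect", "jdbc:mysql://localhost/shop",
                    "--username", "hadoop",
                    "--password", "secret",
                    "--table", "sales",
                    "--hive-import",             // load the imported rows straight into Hive
                    "--hive-table", "sales",
                    "-m", "1");                  // single mapper, so no split column is needed
            pb.inheritIO();                       // stream Sqoop's output to this console
            int exitCode = pb.start().waitFor();
            System.out.println("sqoop import finished with exit code " + exitCode);
        }
    }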

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Refinery with YARN and MapReduce

Course Number:
df_ahec_a06_it_enus
Lesson Objectives

Data Refinery with YARN and MapReduce

  • start the course
  • describe parallel processing in the context of supercomputing
  • list the components of YARN and identify their primary functions
  • diagram YARN Resource Manager and identify its key components
  • diagram YARN Node Manager and identify its key components
  • diagram YARN ApplicationMaster and identify its key components
  • describe the operations of YARN
  • identify the standard configuration parameters to be changed for YARN
  • define the principal concepts of key-value pairs and list the rules for key-value pairs
  • describe how MapReduce transforms key-value pairs
  • load a large textbook and then run WordCount to count the number of words in the textbook
  • label all of the functions for MapReduce on a diagram
  • match the phases of MapReduce to their definitions
  • set up the classpath and test WordCount
  • build a JAR file and run WordCount
  • describe the base mapper class of the MapReduce Java API and describe how to override its methods
  • describe the base Reducer class of the MapReduce Java API and describe how to override its methods
  • describe the function of the MapReduceDriver Java class
  • set up the classpath and test a MapReduce job
  • identify the concept of streaming for MapReduce
  • stream a Python job
  • understand YARN features and components, as well as MapReduce and its classes

Overview/Description
The core of Hadoop consists of a storage part, HDFS, and a processing part, MapReduce. Hadoop splits files into large blocks and distributes the blocks amongst the nodes in the cluster. To process the data, Hadoop and MapReduce transfer code to nodes that have the required data, which the nodes then process in parallel. This approach takes advantage of data locality to allow the data to be processed faster and more efficiently via distributed processing than by using a more conventional supercomputer architecture that relies on a parallel file system where computation and data are connected via high-speed networking. In this course, you'll learn about the theory of YARN as a parallel processing framework for Hadoop. You'll also learn about the theory of MapReduce as the backbone of parallel processing jobs. Finally, this course demonstrates MapReduce in action by explaining the pertinent classes and then walking through a MapReduce program step by step.
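
Since the course walks through WordCount and the Mapper and Reducer base classes, here is a minimal, conventional WordCount sketch against the MapReduce Java API. The input and output paths are taken from the command line, and the output directory must not already exist.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every token in its input split.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: sums the counts emitted for each word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);   // combiner reuses the reducer logic
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }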

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Repository with Flume

Course Number:
df_ahec_a04_it_enus
Lesson Objectives

Data Repository with Flume

  • start the course
  • describe the three key attributes of Flume
  • recall some of the protocols cURL supports
  • use cURL to download web server data
  • recall some best practices for the Agent Conf files
  • install and configure Flume
  • create a Flume agent
  • describe a Flume agent in detail
  • use a Flume agent to load data into HDFS
  • identify popular sources
  • identify popular sinks
  • describe Flume channels
  • describe what is happening during a file roll
  • recall that Avro can be used as both a sink and a source
  • use Avro to capture a remote file
  • create multiple-hop Flume agents
  • describe interceptors
  • create a Flume agent with a TimeStampInterceptor
  • describe multifunction Flume agents
  • configure Flume agents for multiflow
  • create multi-source Flume agents
  • compare replicating to multiplexing
  • create a Flume agent for multiple data sinks
  • recall some common reasons for Flume failures
  • use the logger to troubleshoot Flume agents
  • configure the various Flume agents

Overview/Description
Hadoop is an open source software project that enables distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software's ability to detect and handle failures at the application layer. In this course, you'll learn about the theory of Flume as a tool for dealing with the extraction and loading of unstructured data. You'll explore a detailed explanation of the Flume agents and a demonstration of the Flume agents in action.
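
To make the agent model concrete, the sketch below uses Flume's RPC client to send events to an agent's Avro source, which the agent would then route through a channel to a sink such as HDFS. It assumes an agent is already running with an Avro source on localhost:41414; the host, port, and sample messages are assumptions for illustration.

    import java.nio.charset.StandardCharsets;

    import org.apache.flume.Event;
    import org.apache.flume.api.RpcClient;
    import org.apache.flume.api.RpcClientFactory;
    import org.apache.flume.event.EventBuilder;

    public class FlumeRpcExample {
        public static void main(String[] args) throws Exception {
            // Connects to the Avro source declared in the agent's .conf file.
            RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 41414);
            try {
                for (int i = 0; i < 10; i++) {
                    Event event = EventBuilder.withBody(
                            "log line " + i, StandardCharsets.UTF_8);
                    client.append(event);   // routed by the agent through its channel to the sink
                }
            } finally {
                client.close();
            }
        }
    }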

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Repository with HDFS and HBase

Course Number:
df_ahec_a03_it_enus
Lesson Objectives

Data Repository with HDFS and HBase

  • start the course
  • configure the replication of data blocks
  • configure the default file system scheme and authority
  • describe the functions of the NameNode
  • recall how the NameNode operates
  • recall how the DataNode maintains data integrity
  • describe the purpose of the CheckPoint Node
  • describe the role of the Backup Node
  • recall the syntax of the file system shell commands
  • use shell commands to manage files
  • use shell commands to provide information about the file system
  • perform common administration functions
  • configure parameters for NameNode and DataNode
  • troubleshoot HDFS errors
  • describe key attributes of NoSQL databases
  • describe the roles of HBase and ZooKeeper
  • install and configure ZooKeeper
  • install and configure HBase
  • use the HBase command line to create tables and insert data
  • manage tables and view the web interface
  • create and change HBase data
  • provide a basic understanding of how Hadoop Distributed File System functions

Overview/Description
Hadoop is an open source Java framework for processing and querying vast amounts of data on large clusters of commodity hardware. It relies on an active community of contributors from all over the world for its success. In this course, you'll explore the server architecture for Hadoop and learn about the functions and configuration of the daemons making up the Hadoop Distributed File System. You'll also learn about the command line interface and common HDFS administration issues facing all end users. Finally, you'll explore the theory of HBase as another data repository built alongside or on top of HDFS, and basic HBase commands.
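
As a hedged companion to the file system shell commands covered above, the sketch below performs the same basic operations through the HDFS Java API. It assumes a pseudo-distributed cluster with the NameNode at hdfs://localhost:9000 and a local file named notes.txt; the URI, paths, and file name are assumptions for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsClientExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000");   // same key as in core-site.xml
            FileSystem fs = FileSystem.get(conf);

            Path dir = new Path("/user/hadoop/demo");
            fs.mkdirs(dir);                                      // like: hdfs dfs -mkdir -p
            fs.copyFromLocalFile(new Path("notes.txt"),          // like: hdfs dfs -put
                    new Path(dir, "notes.txt"));

            for (FileStatus status : fs.listStatus(dir)) {       // like: hdfs dfs -ls
                System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
            }
            fs.close();
        }
    }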

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Data Repository with Sqoop

Course Number:
df_ahec_a05_it_enus
Lesson Objectives

Data Repository with Sqoop

  • start the course
  • describe MySQL
  • install MySQL
  • create a database in MySQL
  • create MySQL tables and load data
  • describe Sqoop
  • describe Sqoop's architecture
  • recall the dependencies for Sqoop installation
  • install Sqoop
  • recall why it's important for the primary key to be numeric
  • perform a Sqoop import from MySQL into HDFS
  • recall what concerns developers should be aware of
  • perform a Sqoop export from HDFS into MySQL
  • recall that you must execute a Sqoop import statement for each data element
  • perform a Sqoop import from MySQL into HBase
  • recall how to use chain troubleshooting to resolve Sqoop issues
  • use the log files to identify common Sqoop errors and their resolutions
  • use Sqoop to extract data from an RDBMS and load the data into HDFS

Overview/Description
Hadoop is an open-source software framework for storing and processing big data in a distributed fashion on large clusters of commodity hardware. Essentially, it accomplishes two tasks: massive data storage and faster processing. This course explains the theory of Sqoop as a tool for dealing with extraction and loading of structured data from an RDBMS. You'll explore an explanation of Sqoop statements and a demonstration of Sqoop in action.
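
For a concrete sense of what a Sqoop import looks like, the sketch below drives a Sqoop 1 import from MySQL into HDFS programmatically through Sqoop.runTool, the same entry point the sqoop CLI uses. It assumes the Sqoop 1 and MySQL JDBC jars are on the classpath and that a shop database contains a customers table with a numeric id primary key; all connection details and names are assumptions for illustration.

    import org.apache.sqoop.Sqoop;

    public class SqoopImportExample {
        public static void main(String[] args) {
            String[] importArgs = {
                    "import",
                    "--connect", "jdbc:mysql://localhost/shop",
                    "--username", "hadoop",
                    "--password", "secret",
                    "--table", "customers",
                    "--target-dir", "/user/hadoop/customers",
                    "--split-by", "id",   // numeric key used to divide the work across mappers
                    "-m", "2"
            };
            // runTool parses the arguments exactly as the sqoop command line would.
            int exitCode = Sqoop.runTool(importArgs);
            System.out.println("sqoop finished with exit code " + exitCode);
        }
    }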

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Ecosystem for Hadoop

Course Number:
df_ahec_a01_it_enus
Lesson Objectives

Ecosystem for Hadoop

  • start the course
  • describe supercomputing
  • recall three major functions of data analytics
  • define Big Data
  • describe the two different types of data
  • describe the components of the Big Data stack
  • identify the data repository components
  • identify the data refinery components
  • identify the data factory components
  • recall the design principles of Hadoop
  • describe the design principle of sharing nothing
  • describe the design principle of embracing failure
  • describe the components of the Hadoop Distributed File System (HDFS)
  • describe the four main HDFS daemons
  • describe Hadoop YARN
  • describe the roles of the Resource Manager daemon
  • describe the YARN NodeManager and ApplicationMaster daemons
  • define MapReduce and describe its relation to YARN
  • describe data analytics
  • describe the reasons for the complexities of the Hadoop Ecosystem
  • describe the components of the Hadoop ecosystem

Overview/Description
Hadoop's HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets. This course examines the Hadoop ecosystem by demonstrating all of the commonly used open source software components. You'll explore a Big Data model to understand how these tools combine to create a supercomputing platform. You'll also learn how the principles of supercomputing apply to Hadoop and how this yields an affordable supercomputing environment.

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis

Installation of Hadoop

Course Number:
df_ahec_a02_it_enus
Lesson Objectives

Installation of Hadoop

  • start the course
  • recall the minimum system requirements for installation
  • configure the start-up shell and yum repositories
  • install the Java Development Kit
  • set up SSH for Hadoop
  • recall why version 2.0 was significant
  • describe the three different installation modes
  • download and install Apache Hadoop
  • configure Hadoop environmental variables
  • configure Hadoop HDFS
  • start and stop Hadoop HDFS
  • configure Hadoop YARN and MapReduce
  • start and stop Hadoop YARN
  • validate the installation and configuration
  • recall the structure of the HDFS command
  • recall the importance of the output directory
  • run WordCount
  • recall the ports of the NameNode and Resource Manager Web UIs
  • use the NameNode and Resource Manager Web UIs
  • describe the best practices for changing configuration files
  • recall some of the most common errors and how to fix them
  • access Hadoop logs and troubleshoot Hadoop installation errors
  • install and configure Hadoop and its associated components

Overview/Description
Apache Hadoop is an open source framework for distributed storage and processing of large sets of data on commodity hardware. Hadoop enables businesses to quickly gain insight from massive amounts of structured and unstructured data. In this course, you'll follow step-by-step instructions for installing Hadoop in pseudo-mode and troubleshooting installation errors. You'll also learn where the log files are located and more about the architecture.
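
As a simple post-install sanity check in the spirit of the validation step above, the sketch below loads the Hadoop configuration from the classpath and prints the settings a pseudo-mode install typically changes. The property keys are the standard Hadoop 2.x names; which files they live in depends on your installation.

    import org.apache.hadoop.conf.Configuration;

    public class ConfigCheckExample {
        public static void main(String[] args) {
            // new Configuration() picks up core-default.xml and core-site.xml automatically;
            // the other site files must be added explicitly.
            Configuration conf = new Configuration();
            conf.addResource("hdfs-site.xml");
            conf.addResource("yarn-site.xml");
            conf.addResource("mapred-site.xml");

            System.out.println("fs.defaultFS              = " + conf.get("fs.defaultFS"));
            System.out.println("dfs.replication           = " + conf.get("dfs.replication", "3 (default)"));
            System.out.println("yarn.resourcemanager.hostname = "
                    + conf.get("yarn.resourcemanager.hostname", "0.0.0.0 (default)"));
            System.out.println("mapreduce.framework.name  = "
                    + conf.get("mapreduce.framework.name", "local (default)"));
        }
    }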

Target Audience
Technical personnel with a background in Linux, SQL, and programming who intend to join a Hadoop Engineering team in roles such as Hadoop developer, data architect, or data engineer or roles related to technical project management, cluster operations, or data analysis
